MIT professor delivers talk at Fanling Kau Yan College about self-regulated learning

Around 60 school principals, vice principals and English panel heads attend the talk delivered by Professor Philip Yu, fourth from right, front row, at Fanling Kau Yan College on 23 January 2024. They pose for a picture after the talk.

Professor Philip Yu Leung-ho talks to an audience of school principals, vice principals and English panel heads about AI-assisted self-directed English learning.

Learning a foreign language is never an easy task. For a start, mastering grammar and syntax can be daunting. Learners also find it challenging to memorise vocabulary, and with a limited vocabulary it is hard to sustain a meaningful conversation in the foreign language. Pronouncing words correctly poses another difficulty, especially for learners who live in a community made up predominantly of non-English speakers.

Professor Philip Yu Leung-ho, Associate Director of the University Research Facility of Data Science and Artificial Intelligence, has long cherished the ambition of using AI technology to help people learn foreign languages. “When information is presented both verbally and visually, learners can use both their verbal and nonverbal abilities to process the information, hence facilitating memorisation,” said Professor Yu, the former head of the Department of Mathematics and Information Technology (MIT).

On 23 January 2024, the professor, an expert in AI, statistics and AI education, was invited to deliver a talk at Fanling Kau Yan College on self-regulated learning as part of the school’s 20th anniversary celebrations. Professor Yu shared with the audience, made up of school principals, vice principals and English panel heads from various secondary schools, how a picture-cued writing tool can help students learn English in a self-directed manner.

Numerous studies have shown that students learn more effectively through picture-cued tasks than through tasks without pictures. “Presenting a picture to students and asking them to describe it enables them to learn through visual observation. Through the use of AI technology, we can fully exploit the advantages of picture-based learning and empower students to learn English in a self-directed manner,” Professor Yu told the audience.

To collect data for developing the AI-assisted learning tool, Professor Yu’s research team conducted an experiment with more than 200 mainland students. They were presented with different pictures and asked to observe each one carefully and write a sentence in English describing the people, animals or things in it.

The AI-assisted learning tool requires the learners to write sentences to describe pictures taken from real-life situations. The pictures cover a variety of topics and themes, including the natural world, popular sports, everyday activities and so forth. Different pictures present different levels of content complexity to the learners.

“Picture-cued writing tasks have many advantages over text-based writing. Pictures present information in an intuitive, objective and usually more interesting manner, which suits young writers with low levels of literacy skills. By prompting learners to answer questions about, say, the colours, the background and the actors in a picture, they are further stimulated to use their intuition to learn. Picture-cued writing can stimulate multisensory perception and interaction, which reinforces the effectiveness of learning,” Professor Yu explained.

Not only does the AI-assisted picture-cued writing tool increase students’ interest in learning, its automatic scoring function also reduces teachers’ grading workload and shortens the time needed to provide feedback to learners. “Most existing tools are designed for text-based writing tasks, especially essay writing, and are often based on NLP algorithms. The tool developed by my team at EdUHK contains a marking function that can evaluate the pictures and textual answers simultaneously,” Professor Yu continued.
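To give a concrete sense of what marking a picture together with a textual answer can involve, here is a minimal sketch. It assumes an off-the-shelf image-text model rather than the EdUHK team’s own system; the model name, library and scoring function are illustrative placeholders only.

# A minimal sketch (not the actual EdUHK tool) that scores how well a learner's
# sentence matches the picture it describes, using a pretrained CLIP model.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def relevance_score(image_path: str, sentence: str) -> float:
    """Return a rough image-text relevance score for the learner's sentence."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=[sentence], images=image,
                       return_tensors="pt", padding=True)
    outputs = model(**inputs)
    # logits_per_image reflects how strongly the sentence matches the picture;
    # a real marking function would also assess grammar and vocabulary.
    return outputs.logits_per_image.item()

# Example: a relevant description should score higher than an irrelevant one.
# relevance_score("park.jpg", "Two children are flying a kite in the park.")
# relevance_score("park.jpg", "A man is cooking dinner in the kitchen.")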

The rapid development of multimodal GenAI brings new possibilities to language learning. By employing emerging multimodal GenAI, the learning system developed by Professor Yu has enormous potential to generate a wide range of image-based language learning activities that cater to students’ varied learning needs, including vocabulary learning, sentence generation, chatbot communication and gamified activities.
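As a purely illustrative sketch of how one such activity could be generated, the snippet below asks a vision-capable GenAI model to turn a single picture into a short vocabulary exercise. It assumes the OpenAI Python client; the model name, prompt wording and helper function are hypothetical and not part of Professor Yu’s system.

# A hypothetical sketch: generate a picture-cued vocabulary activity with a
# vision-capable GenAI model (assumes the OpenAI Python SDK and an API key).
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def vocabulary_activity(image_path: str) -> str:
    """Ask the model for five vocabulary items visible in the picture."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable chat model would do
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "List five English vocabulary items visible in this "
                         "picture, each with a simple example sentence for a "
                         "secondary school learner."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content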

Professor Yu’s talk is one of the activities celebrating the 20th anniversary of Fanling Kau Yan College.

Professor Yu’s talk received very positive feedback from the audience. Teachers from several schools expressed interest in trialling the AI-assisted learning tool developed by Professor Yu’s team with their students. Professor Yu also introduced to the audience the Master of Science in Artificial Intelligence and Educational Technology (MSc(AI&EdTech)) and the Bachelor of Science (Honours) in Artificial Intelligence and Educational Technology (BSc(AI&EdTech)) programmes offered by MIT. These programmes equip students with the knowledge and skills to design curricula supported by AI and a range of educational technologies.